26 research outputs found

    A Library-Based Approach to Task Parallelism in a Data-Parallel Language (Article No. PC971367)

    Pure data-parallel languages such as High Performance Fortran version 1 (HPF) do not allow efficient expression of mixed task/data-parallel computations or the coupling of separately compiled data-parallel modules. In this paper, we show how these common parallel program structures can be represented, with only minor extensions to the HPF model, by using a coordination library based on the Message Passing Interface (MPI). This library allows data-parallel tasks to exchange distributed data structures using calls to simple communication functions. We present microbenchmark results that characterize the performance of this library and that quantify the impact of optimizations that allow reuse of communication schedules in common situations. In addition, results from two-dimensional FFT, convolution, and multiblock programs demonstrate that the HPF/MPI library can provide performance superior to that of pure HPF. We conclude that this synergistic combination of two parallel programming standards represents a useful approach to task parallelism in a data-parallel framework, increasing the range of problems addressable in HPF without requiring complex compiler technology.
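
    As a rough illustration of the coordination idea described in this abstract (disjoint process groups acting as data-parallel tasks that exchange data by message passing), the C/MPI sketch below splits MPI_COMM_WORLD into two task groups, connects them with an intercommunicator, and has the two group leaders exchange a value as a stand-in for a distributed array. This is a minimal sketch of the general pattern, not the paper's HPF/MPI library; the two-way split and the tag value are arbitrary choices.

        /* Minimal sketch (not the paper's HPF/MPI library): two process
         * groups play the role of data-parallel tasks and communicate
         * through an MPI intercommunicator. */
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int world_rank, world_size;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
            MPI_Comm_size(MPI_COMM_WORLD, &world_size);

            /* First half of the processes form task 0, second half task 1. */
            int task = (world_rank < world_size / 2) ? 0 : 1;
            MPI_Comm task_comm;                     /* intra-task communicator */
            MPI_Comm_split(MPI_COMM_WORLD, task, world_rank, &task_comm);

            /* Connect the two tasks; the local leader is rank 0 of each task,
             * the remote leader is the other task's leader in MPI_COMM_WORLD. */
            int remote_leader = (task == 0) ? world_size / 2 : 0;
            MPI_Comm inter;                         /* inter-task communicator */
            MPI_Intercomm_create(task_comm, 0, MPI_COMM_WORLD, remote_leader,
                                 99, &inter);

            /* Leaders exchange one value as a stand-in for a distributed array;
             * ranks in an intercommunicator send/recv refer to the remote group. */
            double value = 0.0;
            if (task == 0 && world_rank == 0) {
                value = 3.14;
                MPI_Send(&value, 1, MPI_DOUBLE, 0, 0, inter);
            } else if (task == 1 && world_rank == world_size / 2) {
                MPI_Recv(&value, 1, MPI_DOUBLE, 0, 0, inter, MPI_STATUS_IGNORE);
                printf("task 1 received %f\n", value);
            }

            MPI_Comm_free(&inter);
            MPI_Comm_free(&task_comm);
            MPI_Finalize();
            return 0;
        }

    Run with at least two MPI processes (for example, mpiexec -n 4 ./a.out). The library described in the paper exchanges whole distributed HPF arrays rather than a single scalar, but the group-plus-intercommunicator structure is the same basic idea.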

    Contribution of Cystine-Glutamate Antiporters to the Psychotomimetic Effects of Phencyclidine

    Altered glutamate signaling contributes to a myriad of neural disorders, including schizophrenia. While synaptic levels are intensely studied, nonvesicular release mechanisms, including cystine–glutamate exchange, maintain high steady-state glutamate levels in the extrasynaptic space. The existence of extrasynaptic receptors, including metabotropic group II glutamate receptors (mGluR), poses nonvesicular release mechanisms as unrecognized targets capable of contributing to pathological glutamate signaling. We tested the hypothesis that activation of cystine–glutamate antiporters using the cysteine prodrug N-acetylcysteine would blunt psychotomimetic effects in the rodent phencyclidine (PCP) model of schizophrenia. First, we demonstrate that PCP elevates extracellular glutamate in the prefrontal cortex, an effect that is blocked by N-acetylcysteine pretreatment. To determine the relevance of the above finding, we assessed social interaction and found that N-acetylcysteine reverses social withdrawal produced by repeated PCP. In a separate paradigm, acute PCP resulted in working memory deficits assessed using a discrete trial t-maze task, and this effect was also reversed by N-acetylcysteine pretreatment. The capacity of N-acetylcysteine to restore working memory was blocked by infusion of the cystine–glutamate antiporter inhibitor (S)-4-carboxyphenylglycine into the prefrontal cortex or by systemic administration of the group II mGluR antagonist LY341495, indicating that the effects of N-acetylcysteine require cystine–glutamate exchange and group II mGluR activation. Finally, protein levels from postmortem tissue obtained from schizophrenic patients revealed significant changes in the level of xCT, the active subunit for cystine–glutamate exchange, in the dorsolateral prefrontal cortex. These data advance cystine–glutamate antiporters as novel targets capable of reversing the psychotomimetic effects of PCP.

    The A-Current Modulates Learning via NMDA Receptors Containing the NR2B Subunit

    Synaptic plasticity involves short- and long-term events, although the molecular mechanisms that underlie these processes are not fully understood. The transient A-type K+ current (IA) controls the excitability of the dendrites of CA1 pyramidal neurons by regulating the back-propagation of action potentials and shaping synaptic input. Here, we have studied how decreases in IA affect cognitive processes and synaptic plasticity. Using wild-type mice treated with 4-AP, an IA inhibitor, and mice lacking the DREAM protein, a transcriptional repressor and modulator of the IA, we demonstrate that impairment of IA decreases the stimulation threshold for learning and the induction of early-LTP. Hippocampal electrical recordings in both models revealed alterations in basal electrical oscillatory properties toward low-theta frequencies. In addition, we demonstrated that the facilitated learning induced by decreased IA requires the activation of NMDA receptors containing the NR2B subunit. Together, these findings point to a balance between the IA and the activity of NR2B-containing NMDA receptors in the regulation of learning.

    MPI as a Coordination Layer for Communicating HPF Tasks

    Data-parallel languages such as High Performance Fortran (HPF) present a simple execution model in which a single thread of control performs high-level operations on distributed arrays. These languages can greatly ease the development of parallel programs. Yet there are large classes of applications for which a mixture of task and data parallelism is most appropriate. Such applications can be structured as collections of data-parallel tasks that communicate by using explicit message passing. Because the Message Passing Interface (MPI) defines standardized, familiar mechanisms for this communication model, we propose that HPF tasks communicate by making calls to a coordination library that provides an HPF binding for MPI. The semantics of a communication interface for sequential languages can be ambiguous when the interface is invoked from a parallel language; we show how these ambiguities can be resolved by describing one possible HPF binding for MPI. We then present the design of a library…
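
    To make the binding question above concrete (what a single logical "send" means when the caller is a data-parallel task whose array is spread over many processes), here is a minimal C sketch under a strong simplifying assumption: the sending and receiving tasks have the same number of processes and use the same BLOCK distribution, so each owner can forward its local block to the like-ranked process of the remote task over an intercommunicator such as the one built in the earlier sketch. The names send_distributed and recv_distributed are hypothetical and are not part of HPF, MPI, or the binding described in the paper.

        #include <mpi.h>

        /* Sketch: lower a logical "send the distributed array" into one
         * point-to-point transfer per owning process.  Assumes matching
         * BLOCK distributions and equal group sizes on both tasks. */
        void send_distributed(double *local, int nlocal, MPI_Comm inter)
        {
            int rank;
            MPI_Comm_rank(inter, &rank);   /* rank within the sending task */
            /* The like-ranked process of the remote task owns the matching block. */
            MPI_Send(local, nlocal, MPI_DOUBLE, rank, 0, inter);
        }

        void recv_distributed(double *local, int nlocal, MPI_Comm inter)
        {
            int rank;
            MPI_Comm_rank(inter, &rank);   /* rank within the receiving task */
            MPI_Recv(local, nlocal, MPI_DOUBLE, rank, 0, inter, MPI_STATUS_IGNORE);
        }

    When the two tasks use different distributions or different numbers of processes, one logical transfer instead expands into a set of sends and receives computed from both layouts, which is where the precomputed communication schedules discussed in the library paper become useful.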

    A Library-Based Approach to Task Parallelism in a Data-Parallel Language

    The data-parallel language High Performance Fortran (HPF) does not allow efficient expression of mixed task/data-parallel computations or the coupling of separately compiled data-parallel modules. In this paper, we show how these common parallel program structures can be represented, with only minor extensions to the HPF model, by using a coordination library based on the Message Passing Interface (MPI). This library allows data-parallel tasks to exchange distributed data structures using calls to simple communication functions. We present microbenchmark results that characterize the performance of this library and that quantify the impact of optimizations that allow reuse of communication schedules in common situations. In addition, results from two-dimensional FFT, convolution, and multiblock programs demonstrate that the HPF/MPI library can provide performance superior to that of pure HPF. We conclude that this synergistic combination of two parallel programming standards represents..
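
    The schedule-reuse optimization mentioned in this abstract can be pictured in generic MPI terms: the counts and displacements that describe a redistribution are computed once and then fed unchanged to MPI_Alltoallv on every iteration, so the setup cost is amortized over many transfers. The C sketch below is an illustration under our own assumptions, not the paper's implementation; build_schedule fabricates a trivial uniform schedule, where a real library would derive the counts and displacements from the source and target array distributions.

        #include <mpi.h>
        #include <stdlib.h>

        /* Illustrative only: every rank exchanges 'chunk' doubles with every
         * other rank.  A real schedule would be derived from the two array
         * distributions instead of being uniform. */
        static void build_schedule(int size, int chunk, int **counts, int **displs)
        {
            *counts = malloc(size * sizeof(int));
            *displs = malloc(size * sizeof(int));
            for (int r = 0; r < size; ++r) {
                (*counts)[r] = chunk;
                (*displs)[r] = r * chunk;
            }
        }

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);
            int size;
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            const int chunk = 1024;
            double *sendbuf = calloc((size_t)size * chunk, sizeof(double));
            double *recvbuf = calloc((size_t)size * chunk, sizeof(double));
            int *counts, *displs;
            build_schedule(size, chunk, &counts, &displs);  /* setup paid once */

            for (int step = 0; step < 100; ++step) {
                /* ... data-parallel computation filling sendbuf ... */
                MPI_Alltoallv(sendbuf, counts, displs, MPI_DOUBLE,
                              recvbuf, counts, displs, MPI_DOUBLE,
                              MPI_COMM_WORLD);
                /* ... continue computing on recvbuf in the new layout ... */
            }

            free(sendbuf); free(recvbuf); free(counts); free(displs);
            MPI_Finalize();
            return 0;
        }

    Reusing the same counts and displacements across time steps is the generic analogue of the schedule caching whose impact the paper's microbenchmarks quantify.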
